Journal article

Adversarial Coreset Selection for Efficient Robust Training

HM Dolatabadi, SM Erfani, C Leckie

International Journal of Computer Vision | Springer | Published: 2023

Abstract

It has been shown that neural networks are vulnerable to adversarial attacks: adding well-crafted, imperceptible perturbations to their input can modify their output. Adversarial training is one of the most effective approaches to training robust models against such attacks. Unfortunately, this method is much slower than vanilla training of neural networks since it needs to construct adversarial examples for the entire training set at every iteration. By leveraging the theory of coreset selection, we show how selecting a small subset of the training data provides a principled approach to reducing the time complexity of robust training. To this end, we first provide convergence guarantees for ad..
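The core idea in the abstract — crafting adversarial examples for only a selected subset of the training data at each iteration, instead of the whole set — can be illustrated with a hedged toy sketch. This is not the authors' algorithm: the model is a plain logistic regression, the attack is single-step FGSM, and the "coreset" score (smallest decision margin) is a simplistic stand-in for the gradient-based selection the paper develops.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy, linearly separable binary classification data (stand-in for a real dataset).
X = rng.normal(size=(200, 5))
w_true = rng.normal(size=5)
y = (X @ w_true > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad_w(w, Xb, yb):
    # Gradient of the average logistic loss with respect to the weights.
    p = sigmoid(Xb @ w)
    return Xb.T @ (p - yb) / len(yb)

def fgsm(w, Xb, yb, eps=0.1):
    # FGSM: perturb each input along the sign of the input-gradient of its loss.
    # For logistic regression, d(loss_i)/dx_i = (p_i - y_i) * w.
    p = sigmoid(Xb @ w)
    gx = np.outer(p - yb, w)
    return Xb + eps * np.sign(gx)

w = np.zeros(5)
subset_size = 40  # adversarial examples are built for only 20% of the data
for epoch in range(50):
    # Coreset proxy (assumption): keep the points closest to the decision
    # boundary, i.e. with the smallest absolute margin under the current model.
    margins = np.abs(X @ w)
    idx = np.argsort(margins)[:subset_size]
    # Robust-training step on the selected subset only.
    X_adv = fgsm(w, X[idx], y[idx])
    w -= 0.5 * grad_w(w, X_adv, y[idx])

# Clean accuracy of the robustly trained model on the full dataset.
acc = float(((sigmoid(X @ w) > 0.5) == y.astype(bool)).mean())
print(acc)
```

The point of the sketch is the cost profile: the expensive step (attack generation) runs on `subset_size` points per epoch rather than all 200, which is exactly the savings the coreset-selection view formalizes.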



Funding Acknowledgements

This research was undertaken using the LIEF HPC-GPGPU Facility hosted at the University of Melbourne. This Facility was established with the assistance of LIEF Grant LE170100200. Sarah Erfani is in part supported by Australian Research Council (ARC) Discovery Early Career Researcher Award (DECRA) DE220100680. Moreover, this research was partially supported by the ARC Centre of Excellence for Automated Decision-Making and Society (CE200100005), and funded partially by the Australian Government through the Australian Research Council.